34 research outputs found

    Attention to the model's face when learning from video modeling examples in adolescents with and without autism spectrum disorder

    We investigated the effects of seeing the instructor's (i.e., the model's) face in video modeling examples on students' attention and their learning outcomes. Research with university students suggested that the model's face attracts students' attention away from what the model is doing, but this did not hamper learning. We aimed to investigate whether we would replicate this finding in adolescents (prevocational education) and to establish how adolescents with autism spectrum disorder, who have been found to look less at faces generally, would process video examples in which the model's face is visible. Results showed that typically developing adolescents who did see the model's face paid significantly less attention to the task area than typically developing adolescents who did not see the model's face. Adolescents with autism spectrum disorder paid less attention to the model's face and more to the task demonstration area than typically developing adolescents who saw the model's face. These differences in viewing behavior, however, did not affect learning outcomes. This study provides further evidence that seeing the model's face in video examples affects students' attention but not their learning outcomes.

    Task Experience as a Boundary Condition for the Negative Effects of Irrelevant Information on Learning

    Research on multimedia learning has shown that learning is hampered when a multimedia message includes extraneous information.

    Do social cues in instructional videos affect attention allocation, perceived cognitive load, and learning outcomes under different visual complexity conditions?

    Background: There are only a few guidelines on how instructional videos should be designed to optimize learning. Recently, the effects of social cues on attention allocation and learning in instructional videos have been investigated. Due to inconsistent results, it has been suggested that the visual complexity of a video influences the effect of social cues on learning. Objectives: Therefore, this study compared the effects of social cues (i.e., gaze and gesture) in low and high visual complexity videos on attention, perceived cognitive load, and learning outcomes. Methods: Participants (N = 71) were allocated to a social cue or no social cue condition and watched both a low and a high visual complexity video. After each video, participants completed a knowledge test. Results and Conclusions: Results showed that participants looked faster at referenced information and had higher learning outcomes in the low visual complexity condition. Social cues did not affect any of the dependent variables, except when prior knowledge was included in the analysis: in this exploratory analysis, the inclusion of gaze and gesture cues in the videos did lead to better learning outcomes. Takeaways: Our results show that the visual complexity of instructional videos and prior knowledge are important to take into account in future research on attention and learning from instructional videos.

    On the relation between action selection and movement control in 5- to 9-month-old infants

    Although 5-month-old infants select action modes that are adaptive to the size of the object (i.e., one- or two-handed reaching), it has largely remained unclear whether infants of this age control the ensuing movement to the size of the object (i.e., scaling of the aperture between hands). We examined 5-, 7-, and 9-month-olds' reaching behaviors to gain more insight into the developmental changes occurring in the visual guidance of action mode selection and movement control, and the relationship between these processes. Infants were presented with a small set of objects (i.e., 2, 3, 7, and 8 cm) and a large set of objects (i.e., 6, 9, 12, and 15 cm). For the first set of objects, it was found that the infants more often performed two-handed reaches for the larger objects based on visual information alone (i.e., before making contact with the object), thus showing adaptive action mode selection relative to object size. Kinematic analyses of the two-handed reaches for the second set of objects revealed that inter-trial variance in aperture between the hands decreased with the approach toward the object, indicating that infants' reaching is constrained by the object. Subsequent analysis showed that between-hand aperture scaled to object size, indicating that visual control of the movement is adjusted to object size in infants as young as 5 months. Individual analyses indicated that the two processes were not dependent and followed distinct developmental trajectories. That is, adaptive selection of an action mode was not a prerequisite for appropriate aperture scaling, and vice versa. These findings are consistent with the idea of two separate and independent visual systems (Milner and Goodale in Neuropsychologia 46:774–785, 2008) during early infancy.

    Instructievideos en praktische vaardigheden in het vmbo

    Do instructional videos contribute to learning practical technical skills in the upper years of pre-vocational secondary education (vmbo)? Instructional videos can support skill learning when the student has little prior knowledge or experience. A good instructional video explains and demonstrates the skill step by step. Learning from an example takes less time and effort than attempting the task without first watching an example. Whether a video example is more effective than a live demonstration is not known. An instructional video appears to be as effective as a step-by-step written manual.

    The Role of Affordances in the Evolutionary Process Reconsidered: A Niche Construction Perspective

    Gibson asserted that affordances are the primary objects of perception. Although this assertion is especially attractive when considered in the context of evolutionary theory, the role that affordances play in the evolution of animals' perceptual and action systems is still unclear. Trying to combine the insights of both Gibson and Darwin, Reed developed a selectionist view in which affordances are conceived as resources that exert selection pressures, giving rise to animals equipped with action systems. Reed's advocacy of selectionism, however, has been criticized on several grounds, among which is an inconsistency with recent trends in evolutionary thinking. Current developments in evolutionary biology indeed ask for a reconsideration of the role of affordances in the evolution of perceptual and action systems. Adopting a niche construction perspective, we reexamine the role of affordances in the evolutionary process. It is argued that affordances and their utilization, destruction, and creation are central elements in evolutionary dynamics. The implications for ecological psychology and evolutionary theory are explored.

    Individual differences in learning to perceive length by dynamic touch: Evidence for variation in perceptual learning capacities

    Recent studies of perceptual learning have explored and commented on variation in learning trajectories. Although several factors have been suggested to account for this variation, thus far the idea that humans vary in their perceptual learning capacities has received scant attention. In the present experiments we aimed at providing a detailed picture of the variation in this capacity by investigating the perceptual learning trajectories of a considerable number of participants. The learning process was studied using the paradigm of length perception by dynamic touch. The results showed that there are substantial individual differences in the way perceivers respond to feedback. Indeed, after feedback, the participants' perceptual performances diverged. We conclude that humans vary in their perceptual learning capacities. The implications of this finding for recent discussions on variation in perception are explored.

    Seeing the instructor's face and gaze in demonstration video examples affects attention allocation but not learning

    Although the use of video examples in which an instructor demonstrates how to perform a task has become widespread in online and blended education, specific guidelines for designing such examples to optimize learning are scarce. One design question concerns the presence of the instructor or the instructor's face in the video; because faces attract attention, this might hinder learning by drawing students' attention away from the demonstration. Yet, a recent study suggested that seeing the instructor's face in demonstration video examples may help learning, presumably because the instructor's gaze offers guidance as to what s/he is attending to, which may allow anticipating what s/he is going to do. Using a different task, the main aim of the present study was to see if we could replicate this finding by comparing learning outcomes after observing video examples in which the instructor's face was not visible, or was visible and offered gaze guidance. In addition, we aimed to explore whether the effect (assuming we replicated it) would indeed be due to gaze guidance; we therefore added a third, exploratory condition in which the instructor's face was visible but offered no gaze guidance (i.e., staring straight into the camera). Students' eye movements were recorded in all conditions. We did not replicate prior findings with regard to learning outcomes: learning was neither facilitated nor compromised when seeing the instructor's face. The eye movement data suggested that learners are able to efficiently distribute their attention between the instructor's face and the task he is demonstrating.

    Effects of visual complexity and ambiguity of verbal instructions on target identification

    Research has shown that visual complexity and the ambiguity of verbal information affect the speed and accuracy of locating targets during visual search. The higher the visual complexity and description ambiguity, the slower and poorer the target identification performance. Because these factors are seldom studied in combination (even though they regularly co-occur), it is unclear whether they would interact. Therefore, in two experiments, participants viewed images that displayed cartoon-like characters and had to correctly identify a character from a verbal description under conditions of low/high visual complexity and low/high description ambiguity (manipulated within-subjects). Results revealed that high ambiguity descriptions resulted in lower accuracy and slower response times. However, our manipulation of visual complexity did not affect performance or response times, either in itself or in interaction with verbal ambiguity. Findings are discussed in terms of theoretical and practical implications, for instance, for multimedia learning.